
    Picosecond timing of Microwave Cherenkov Impulses from High-Energy Particle Showers Using Dielectric-loaded Waveguides

    We report on the first measurements of coherent microwave impulses from high-energy particle-induced electromagnetic showers generated via the Askaryan effect in a dielectric-loaded waveguide. Bunches of 12.16 GeV electrons with total bunch energy of ~10^3-10^4 GeV were pre-showered in tungsten, and then measured with WR-51 rectangular (12.6 mm by 6.3 mm) waveguide elements loaded with solid alumina (Al2O3) bars. In the 5-8 GHz TE10 single-mode band determined by the presence of the dielectric in the waveguide, we observed band-limited microwave impulses with amplitude proportional to bunch energy. Signals in different waveguide elements measuring the same shower were used to estimate relative time differences with 2.3 picosecond precision. These measurements establish a basis for using arrays of alumina-loaded waveguide elements, with exceptional radiation hardness, as very high precision timing planes for high-energy physics detectors. Comment: 16 pages, 15 figures.
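
    The 2.3 ps relative timing quoted above comes from comparing signals between waveguide elements. As a hedged illustration of the generic technique (upsampled cross-correlation), not the authors' actual analysis code, the sketch below estimates the delay between two invented band-limited impulses; the 40 GS/s sampling rate and pulse shapes are assumptions made for this example.

```python
# Minimal sketch (not the authors' analysis code): estimate the relative delay
# between two band-limited impulses by upsampled cross-correlation.
import numpy as np
from scipy.signal import resample

def relative_delay(a: np.ndarray, b: np.ndarray, fs: float, upsample: int = 100) -> float:
    """Return the delay of b relative to a, in seconds."""
    n = len(a) * upsample
    a_up = resample(a, n)                      # band-limited interpolation
    b_up = resample(b, n)
    corr = np.correlate(b_up, a_up, mode="full")
    lag = np.argmax(corr) - (n - 1)            # lag (in upsampled samples) with maximum correlation
    return lag / (fs * upsample)

# Two toy impulses sampled at 40 GS/s, the second delayed by one sample (25 ps).
fs = 40e9
t = np.arange(256) / fs
pulse = np.exp(-((t - 3e-9) / 0.2e-9) ** 2) * np.sin(2 * np.pi * 6.5e9 * t)
delayed = np.roll(pulse, 1)
print(f"estimated delay: {relative_delay(pulse, delayed, fs) * 1e12:.1f} ps")
```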

    Managing plagiarism in programming assignments with blended assessment and randomisation.

    Plagiarism is a common concern for coursework in many situations, particularly where electronic solutions can be provided, e.g. computer programs, and leads to unreliability of assessment. Written exams are often used to try to deal with this, and to increase reliability, but at the expense of validity. One solution, outlined in this paper, is to randomise the work that is set for students so that it is very unlikely that any two students will be working on exactly the same problem set. This also helps to address the issue of students trying to outsource their work by paying external people to complete their assignments for them. We examine the effectiveness of this approach and others (including blended assessment) by analysing the spread of similarity scores across four different introductory programming assignments to find the natural similarity, i.e. the level of similarity that could reasonably occur without plagiarism. The results of the study indicate that divergent assessment (having more than one possible solution), as opposed to convergent assessment (only one solution), is the dominant factor in natural similarity. A key area for further work is to apply the analysis to a larger sample of programming assignments to better understand the impact of different features of the assignment design on natural similarity and hence the detection of plagiarism.
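
    As a hedged illustration of the randomisation idea described above (not the authors' actual tooling), the sketch below seeds a random generator with a student identifier so that each student receives a distinct but reproducible variant of the assignment; all parameter names are invented.

```python
# Minimal sketch (illustrative only, not the authors' tool): a student's ID
# seeds a random generator so each student receives a distinct but
# reproducible problem-set variant of comparable difficulty.
import random

def generate_problem_set(student_id: str) -> dict:
    rng = random.Random(student_id)   # deterministic per student, so markers can regenerate it
    return {
        "dataset_size": rng.randint(50, 200),                     # size of the input data to process
        "report_field": rng.choice(["name", "date", "score"]),    # field the program must summarise
        "sort_order": rng.choice(["ascending", "descending"]),
    }

print(generate_problem_set("s1234567"))
```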

    Distributed-Pair Programming can work well and is not just Distributed Pair-Programming

    Background: Distributed Pair Programming can be performed via screen sharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful limited fashion, resulting in potentially better fluency of the process than local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.

    An intuitive Python interface for Bioconductor libraries demonstrates the utility of language translators

    Background: Computer languages can be domain-related, and in the case of multidisciplinary projects, knowledge of several languages will be needed in order to implement ideas quickly. Moreover, each computer language has relative strong points, making some languages better suited than others for a given task. The Bioconductor project, based on the R language, has become a reference for the numerical processing and statistical analysis of data coming from high-throughput biological assays, providing a rich selection of methods and algorithms to the research community. At the same time, Python has matured as a rich and reliable language for the agile development of prototypes or final implementations, as well as for handling large data sets. Results: The data structures and functions from Bioconductor can be exposed to Python as a regular library. This allows a fully transparent and native use of Bioconductor from Python, without one having to know the R language and with only a small community of translators required to know both. To demonstrate this, we have implemented such Python representations for key infrastructure packages in Bioconductor, letting a Python programmer handle annotation data, microarray data, and next-generation sequencing data. Conclusions: Bioconductor is now not solely reserved to R users. Building a Python application using Bioconductor functionality can be done just as if Bioconductor were a Python package. Moreover, similar principles can be applied to other languages and libraries. Our Python package is available at: http://pypi.python.org/pypi/rpy2-bioconductor-extensions/
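
    As a hedged illustration of how Bioconductor functionality can be reached from Python, the sketch below uses rpy2's generic importr mechanism rather than necessarily the exact API of the rpy2-bioconductor-extensions package; it assumes R, rpy2, and the Bioconductor package Biobase are installed.

```python
# Minimal sketch (not necessarily the authors' exact wrapper API): calling a
# Bioconductor package from Python through rpy2's importr. Assumes R, rpy2,
# and the Bioconductor package Biobase are installed.
from rpy2.robjects.packages import importr

biobase = importr("Biobase")       # expose the R package as a Python namespace
eset = biobase.ExpressionSet()     # call an R constructor from Python
print(biobase.dims(eset))          # inspect the (empty) ExpressionSet's dimensions
```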

    Open Science in Software Engineering

    Open science describes the movement of making any research artefact available to the public and includes, but is not limited to, open access, open data, and open source. While open science is becoming generally accepted as a norm in other scientific disciplines, in software engineering we are still struggling to adapt open science to the particularities of our discipline, rendering progress in our scientific community cumbersome. In this chapter, we reflect upon the essentials in open science for software engineering, including what open science is, why we should engage in it, and how we should do it. We particularly draw from our experiences as conference chairs implementing open science initiatives and as researchers actively engaging in open science to critically discuss challenges and pitfalls, and to address more advanced topics such as how and under which conditions to share preprints, what infrastructure and licence model to choose, and how to do it within the limitations of different reviewing models, such as double-blind reviewing. Our hope is to help establish a common ground and to contribute to making open science a norm also in software engineering. Comment: Camera-Ready Version of a Chapter published in the book on Contemporary Empirical Methods in Software Engineering; fixed layout issue with side-note.

    A comparison of common programming languages used in bioinformatics

    Background: The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results: Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/ Conclusion: This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
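
    As a hedged illustration of the kind of measurement such a benchmark involves (not the authors' harness), the sketch below times a function and records its peak memory use with the Python standard library; the toy workload is invented.

```python
# Minimal benchmarking sketch (not the authors' harness): measure wall-clock
# time and peak memory of a function using only the Python standard library.
import time
import tracemalloc

def benchmark(func, *args):
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()   # peak bytes allocated during the call
    tracemalloc.stop()
    return result, elapsed, peak

# Toy workload: count mismatches between two equal-length sequences.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

_, seconds, peak_bytes = benchmark(hamming, "GATTACA", "GACTATA")
print(f"{seconds * 1e6:.1f} microseconds, peak {peak_bytes} bytes")
```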

    Choosing Code Segments to Exclude from Code Similarity Detection

    When student programs are compared for similarity as a step in the detection of academic misconduct, certain segments of code are always sure to be similar but are no cause for suspicion. Some of these segments are boilerplate code (e.g. public static void main(String[] args)) and some will be code that was provided to students as part of the assessment specification. This working group explores these and other types of code that are legitimately common in student assessments and can therefore be excluded from similarity checking. From their own institutions, working group members collected assessment submissions that together encompass a wide variety of assessment tasks in a wide variety of programming languages. The submissions were analysed to determine what sorts of code segment arose frequently in each assessment task. The group has found that common code can arise in programming assessment tasks when it is required for compilation purposes; when it reflects an intuitive way to undertake part or all of the task in question; when it can be legitimately copied from external sources; and when it has been suggested by people with whom many of the students have been in contact. A further finding is that the nature and size of the common code fragments vary with course level and with task complexity. An informal survey of programming educators confirms the group's findings and gives some reasons why various educators include code when setting programming assignments.
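
    As a hedged illustration of excluding common code before similarity checking (not the working group's tooling), the sketch below strips a small, invented set of template lines from submissions and then compares what remains.

```python
# Minimal sketch (illustrative, not the working group's tooling): strip known
# template lines from submissions before measuring similarity, so legitimately
# common code does not inflate the score.
import difflib

TEMPLATE_LINES = {
    "public static void main(String[] args) {",
    "import java.util.Scanner;",
    "}",
}

def strip_template(source: str) -> list[str]:
    return [line.strip() for line in source.splitlines()
            if line.strip() and line.strip() not in TEMPLATE_LINES]

def similarity(a: str, b: str) -> float:
    # Ratio of matching lines after the common template has been removed.
    return difflib.SequenceMatcher(None, strip_template(a), strip_template(b)).ratio()
```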

    Experimental tests of sub-surface reflectors as an explanation for the ANITA anomalous events

    The balloon-borne ANITA experiment is designed to detect ultra-high energy neutrinos via radio emissions produced by an in-ice shower. Although initially purposed for interactions within the Antarctic ice sheet, ANITA also demonstrated the ability to self-trigger on radio emissions from ultra-high energy charged cosmic rays interacting in the Earth's atmosphere. For showers produced above the Antarctic ice sheet, reflection of the down-coming radio signals at the Antarctic surface should result in a polarity inversion prior to subsequent observation at the ~35-40 km altitude ANITA gondola. ANITA has published two anomalous instances of upcoming cosmic-rays with measured polarity opposite the remaining sample of ~50 UHECR signals. The steep observed upwards incidence angles (25-30 degrees relative to the horizontal) require non-Standard Model physics if these events are due to in-ice neutrino interactions, as the Standard Model cross-section would otherwise prohibit neutrinos from penetrating the long required chord of Earth. Shoemaker et al. posit that glaciological effects may explain the steep observed anomalous events. We herein consider the scenarios offered by Shoemaker et al. and find them to be disfavored by extant ANITA and HiCal experimental data. We note that the recent report of four additional near-horizon anomalous ANITA-4 events, at >3σ significance, is incompatible with their model, which requires significant signal transmission into the ice.
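
    The polarity inversion expected for surface-reflected signals follows from the sign of the Fresnel reflection coefficient at the air-ice boundary. As a hedged worked example (assuming normal incidence and a surface-snow refractive index of about 1.35, both assumptions of this sketch rather than values taken from the paper):

```python
# Minimal sketch. Assumptions: normal incidence and a surface-snow refractive
# index of ~1.35 (not values taken from the paper). The negative reflection
# coefficient is why a down-going impulse reflected at the surface reaches the
# payload with inverted polarity.
n_air, n_snow = 1.0, 1.35

r = (n_air - n_snow) / (n_air + n_snow)        # Fresnel amplitude reflection coefficient
print(f"reflection coefficient r = {r:.3f}")   # negative, so the reflected waveform is flipped
```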

    Experimental tests of sub-surface reflectors as an explanation for the ANITA anomalous events

    The balloon-borne ANITA [1] experiment is designed to detect ultra-high energy neutrinos via radio emissions produced by in-ice showers. Although initially purposed for interactions within the Antarctic ice sheet, ANITA also demonstrated the ability to self-trigger on radio emissions from ultra-high energy charged cosmic rays [2] (CR) interacting in the Earth's atmosphere. For showers produced above the Antarctic ice sheet, reflection of the down-coming radio signals at the Antarctic surface should result in a polarity inversion prior to subsequent observation at the ~35–40 km altitude ANITA gondola. Based on data taken during the ANITA-1 and ANITA-3 flights, ANITA published two anomalous instances of upcoming cosmic-rays with measured polarity opposite the remaining sample of ~50 UHECR signals [3, 4]. The steep observed upwards incidence angles (25–30 degrees relative to the horizontal) require non-Standard Model physics if these events are due to in-ice neutrino interactions, as the Standard Model cross-section would otherwise prohibit neutrinos from penetrating the long required chord of Earth. Shoemaker et al. [5] posit that glaciological effects may explain the steep observed anomalous events. We herein consider the scenarios offered by Shoemaker et al. and find them to be disfavored by extant ANITA and HiCal experimental data. We note that the recent report of four additional near-horizon anomalous ANITA-4 events [6], at >3σ significance, is incompatible with their model, which requires significant signal transmission into the ice.

    Fast relational learning using bottom clause propositionalization with artificial neural networks

    Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by the ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph while being generally faster. BCP achieved a statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90% of features can be achieved with a small loss of accuracy.
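
    As a hedged illustration of the idea that bottom clauses can be mapped directly onto numerical vectors (a simplified sketch, not the CILP++ implementation), the example below turns the body literals of an invented bottom clause into a binary feature vector over a fixed literal vocabulary.

```python
# Minimal sketch (not the CILP++ implementation): map the body literals of a
# bottom clause onto a binary feature vector over a fixed literal vocabulary.
# The vocabulary and example clause are invented for illustration.
FEATURE_SPACE = ["parent(A,B)", "parent(B,C)", "male(A)", "female(A)", "older(A,B)"]

def bottom_clause_to_vector(body_literals: set[str]) -> list[int]:
    return [1 if literal in body_literals else 0 for literal in FEATURE_SPACE]

# grandfather(A,C) :- parent(A,B), parent(B,C), male(A).
example_body = {"parent(A,B)", "parent(B,C)", "male(A)"}
print(bottom_clause_to_vector(example_body))   # -> [1, 1, 1, 0, 0]
```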